Patent abstract:
Image processing for HDR images. The present invention relates to image coding. Log luminances in an input HDR image are histogrammed to generate a tone map, with which a global log tone-mapped luminance image is computed. The global log tone-mapped luminance image is scaled down. The log luminances and the global log tone-mapped luminance image generate a log ratio image. Multi-scale resolution filtering of the log ratio image generates a multi-scale log ratio image. The multi-scale log ratio image and the log luminances generate a second log tone-mapped image, which is normalized; an output tone-mapped image is produced based on the scaled-down global log tone-mapped luminance image and the normalized image. The input HDR image and the output tone-mapped image generate a second ratio image, which is quantized.
Publication number: BR112014008513B1
Application number: R112014008513-7
Filing date: 2013-07-31
Publication date: 2021-08-17
Inventors: Ankur Shah;Ajit Ninan;Wenhui Jia;Huiming Tong;Qiaoli Yang;Arkady Ten;Gaven Wang
Applicant: Dolby Laboratories Licensing Corporation;
IPC main classification:
Patent description:

[0012] A portion of the disclosure in this patent document contains material that is subject to copyright protection. The copyright owner does not object to the facsimile reproduction by any person of the patent document or the patent disclosure as it appears in the file or records of the Patent and Trademark Office, but otherwise reserves all copyrights. CROSS REFERENCE TO RELATED APPLICATIONS
[0002] This application claims priority to Provisional Patent Application No. US 61/681,061, filed August 8, 2012, which is incorporated herein by reference in its entirety. TECHNOLOGY
[0003] The present invention relates generally to image processing. More particularly, an embodiment of the present invention relates to image processing for images with high dynamic range (HDR). BACKGROUND
[0004] Some contemporary or legacy digital images conform to 24-bit formats. These images use as much as 24 bits to store both color and brightness information, such as luminance and chrominance data, for each pixel in an image. Such formats preserve sufficient image information to allow the image to be rendered or reproduced by legacy electronic displays and are thus considered output referred. Legacy displays typically have a dynamic range (DR) of about three orders of magnitude. Normal human vision, however, can distinguish contrast ratios of 1:1000 or more, so images with significantly higher dynamic ranges can be perceived.
[0005] Developments in modern electronic display technology allow for image rendering and reproduction at a higher dynamic range, which significantly exceeds the DR of legacy displays. High dynamic range (HDR) images more faithfully represent real-world scenes than image formats that conform to the aforementioned output standards. Thus, HDR images can be considered scene referred. In the context of HDR images and displays that are capable of rendering them, legacy or other more limited DR images and displays may be referred to herein as low dynamic range (LDR) images/displays.
[0006] The approaches described in this section are approaches that could be followed, but they are not necessarily approaches that were previously conceived or followed. Therefore, unless otherwise indicated, it should not be assumed that any of the approaches described in this section qualify as prior art by virtue of their inclusion in this section. Similarly, issues identified in relation to one or more approaches should not be assumed to be recognized in any prior art based on this section, unless otherwise indicated. BRIEF DESCRIPTION OF THE DRAWINGS
[0007] The present invention is illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings, in which like reference numbers refer to like elements and in which:
[0008] FIG. 1A depicts an exemplary local multiscale tone mapping system in accordance with an embodiment of the present invention;
[0009] FIG. 1B depicts an example image coding process in accordance with an embodiment of the present invention;
[0010] FIG. 2 depicts an exemplary local multi-scale image processing method in accordance with an embodiment of the present invention;
[0011] FIG. 3A and FIG. 3B, respectively, represent an example HCTN block and corresponding multiscale filtering, in accordance with an embodiment of the present invention;
[0012] FIG. 4A, FIG. 4B and FIG. 4C, respectively, depict an example multi-scale filter block, a corresponding example multi-scale filtering implementation, and an example process, in accordance with an embodiment of the present invention;
[0013] FIG. 5 depicts an example ratio image processor, in accordance with an embodiment of the present invention;
[0014] FIG. 6A and FIG. 6B depict example encoding process data streams for HDR images, in accordance with an embodiment of the present invention;
[0015] FIG. 7 depicts an exposure fusion blending process for displaying an HDR image, in accordance with an embodiment of this invention; and
[0016] FIG. 8A and FIG. 8B depict example JPEG-HDR encoding and decoding processes that support multiple color spaces and wide color gamut, in accordance with embodiments of the invention. DESCRIPTION OF EXAMPLE EMBODIMENTS
[0017] Example embodiments, which relate to image processing of HDR images, are described in this document. In the following description, for purposes of explanation, numerous specific details are presented in order to provide a complete understanding of the present invention. It should be understood, however, that the present invention can be practiced without these specific details. In other instances, well-known structures and devices are not described in exhaustive detail in order to avoid unnecessarily obstructing, obscuring or confusing an aspect of the present invention. OVERVIEW - HDR IMAGES
[0018] This overview presents a basic description of some aspects of example embodiments of the present invention. It should be noted that this overview is not an extensive or exhaustive summary of aspects of the example embodiments. Furthermore, it should be understood that this overview is not to be read as identifying any particularly significant aspects or elements of the example embodiments, nor as delineating any scope of the example embodiments in particular, nor of the invention in general. This overview merely presents some concepts that relate to the example embodiments in a condensed and simplified format and is to be understood merely as a conceptual preface to the more detailed description of example embodiments that follows below.
[0019] An example embodiment of the present invention relates to encoding HDR images. Log luminances in an input HDR image are histogrammed to generate a tone map, with which a global log tone-mapped luminance image is computed. The global log tone-mapped luminance image is scaled down. The log luminances and the global log tone-mapped luminance image generate a log ratio image. Multi-scale resolution filtering of the log ratio image generates a multi-scale log ratio image. The multi-scale log ratio image and the log luminances generate a second log tone-mapped image, which is normalized; an output tone-mapped image is produced based on the scaled-down global log tone-mapped luminance image and the normalized image. The input HDR image and the output tone-mapped image generate a second ratio image, which is quantized.
[0020] An array of active devices (e.g., transistors) is arranged on a semiconductor die. The active devices are configured and operatively interconnected to function as an image encoder. The encoder has a first tone mapper to histogram multiple log luminance values, which are derived from each pixel of a high dynamic range input image. The first tone mapper renders a first ratio image from the histogrammed values. A multi-scale filter decimator scales down the first ratio image and low-pass filters each pixel in it in a horizontal direction and a vertical direction on a recurring basis. Depending on the size of the first ratio image, the first ratio image is decimated and filtered at one, two or three levels. A corresponding ratio image is thus rendered at each of the levels. Each of the corresponding ratio images is recorded in storage (e.g., memory) that is independent of (e.g., external to) the IC device. An amplifier at each of the levels weights each of the filtered pixels of each corresponding ratio image with a scaling factor that corresponds to the level at which the decimator works. A bilinear interpolator upscales each of the weighted ratio images to the level that is next subsequent to each of the previous levels. An adder at each of the levels adds each of the weighted ratio images to the weighted ratio image from the next previous level. A second tone mapper tone-maps a base image and a ratio image thereof, each of which corresponds to the input HDR image, but with a low dynamic range. The base image and its base ratio image are quantized. The quantized base image and the base ratio image can be output, for example, to a JPEG encoder for compression in the JPEG format.
[0021] Some modern electronic displays can essentially render scene-referred HDR images, which exceed the DR capability of legacy displays. (In the context of display DR capability, the terms "render", "play", "recover", "present", "produce", "restore" and "generate" may be used synonymously and/or interchangeably in the present document.) One embodiment of the present invention works effectively with both modern and legacy displays. One embodiment allows capable modern displays to render HDR images at their substantially full contrast ratio and, for backward compatibility, allows legacy and LDR display devices to render the image within their own, somewhat more limited DR reproduction capabilities. One embodiment supports such backward compatibility for LDR displays as well as newer HDR display technologies.
[0022] An embodiment represents an HDR image essentially with a base tone-mapped image (such as an instance of an image that has a lower DR than a corresponding HDR instance of the image) along with encoded metadata, which provides additional information about the image. The additional information comprises data related to image intensity (e.g., luminance, luma) and/or data related to color (e.g., chrominance, chroma). The additional data relate to the difference in DR between an HDR image instance and the corresponding base image instance. Thus, a first (e.g., legacy) display that has relatively limited DR reproduction capability can use the tone-mapped image to present a normal DR image, for example, according to existing, established, or popular image compression/decompression (codec) standards.
[0023] An example embodiment allows normal DR images to be processed in accordance with the JPEG standard of the Joint Photographic Experts Group of the International Telecommunication Union and the International Electrotechnical Commission, JPEG ISO/IEC 10918-1 ITU-T Rec. T.81, which is incorporated by reference, for all purposes, in its entirety as if fully set forth herein. In addition, a second, HDR-capable (e.g., modern) display can process the tone-mapped image along with the image metadata to present the HDR image effectively. On the one hand, the tone-mapped image is used to present a normal dynamic range image on a legacy display. On the other hand, the additional metadata can be used with the tone-mapped image to generate, retrieve or present an HDR image (for example, by an HDR display). One embodiment uses a tone mapping operator (TMO) to create tone-mapped image instances based on the HDR images.
[0024] Various TMOs, such as Reinhard's global photographic operator, can be used to produce tone-mapped images relatively efficiently. In cases where computational cost is not a concern, a bilateral filter can be used to produce relatively high quality tone-mapped images. Bilateral filtering helps preserve image detail, such as areas of brightness, that the computationally more economical Reinhard operator might typically miss. Additionally or alternatively, histogram adjustment TMOs and/or gradient domain TMOs can be used.
[0025] In one embodiment, an image format efficiently supports both HDR images and non-HDR images. Embodiments can work with the JPEG format and/or with many other image formats. For example, embodiments can work with one or more of MPEG, AVI, TIFF, BMP, GIF, or other suitable formats that are familiar to those skilled in the fields related to images. One embodiment works according to the JPEG-HDR image format, which is described in Ward, Greg and Simmons, Maryanne, "Subband Encoding of High Dynamic Range Imagery," in First ACM Symposium on Applied Perception in Graphics and Visualization (APGV), pages 83-90 (2004); Ward, Greg and Simmons, Maryanne, "JPEG-HDR: A Backwards-Compatible, High Dynamic Range Extension to JPEG," in Proceedings of the Thirteenth Color Imaging Conference, pages 283-290 (2005); and E. Reinhard, G. Ward, et al., High Dynamic Range Imaging - Acquisition, Display and Image-Based Lighting, pages 105-108, Elsevier, MA (2010), which are hereby incorporated in their entirety for all purposes as if fully set forth herein.
[0026] To display images on a wide variety of image rendering devices, tone mapping operators (TMOs) process input HDR images into base tone-mapped (TM) images. Base TM images can comprise color changes (e.g., hue changes, color clipping, artistic alterations, etc.) relative to the input image. Under some techniques, base TM images are provided to downstream image decoders along with luminance ratios to reconstruct HDR images equivalent to the input HDR images. However, a downstream image decoder might not be able to remove color changes in a reconstructed HDR image given only a base TM image and grayscale luminance ratios. As a result, color changes could remain noticeable in the reconstructed HDR image.
[0027] HDR image coders of an embodiment described herein create not only luminance ratios but also residual color values based on an input HDR image and a base TM image. Luminance ratios and color residuals can be collectively denoted as HDR reconstruction data. Optionally and/or additionally, the luminance ratios are transformed into a logarithmic domain to support a relatively wide range of luminance values. Optionally and/or additionally, the resulting logarithmic luminance ratios and color residuals are quantized. Optionally and/or additionally, the quantized logarithmic ratios and color residuals are stored in a residual image. The quantized logarithmic ratios and color residuals, or the residual image, in some embodiments, are provided with the base TM image to a downstream image decoder. Optionally and/or additionally, parameters related to the quantized logarithmic ratios and color residual values (e.g., range limits, etc.) are also provided with the base TM image.
[0028] A TMO of an embodiment described in this document can freely perform color clipping in color channels for individual pixels with low (black) or high (white) luminance levels. Furthermore, a TMO, as described in this document, is not required to maintain the hue of each pixel. Under the techniques described in this document, a user is free to select a TMO based on image content (e.g., human figures, an indoor image, an outdoor scene, a night view, a sunset, etc.) or application (e.g., use in a movie, a poster, a wedding photo, a magazine, etc.). Clipping or color modification can be used deliberately and freely to create artistic aspects of an image. The HDR image encoders and decoders in this document support TMOs deployed by different types of editing software and camera manufacturers, which can introduce a wide range of possible color changes. Under the techniques described herein, HDR coders provide color residual values to HDR decoders. HDR decoders, in turn, make use of the color residuals to prevent (or minimize) color changes from being present in reconstructed HDR images.
[0029] An embodiment may use bitstreams and/or image files to store and provide base TM images and their corresponding HDR reconstruction data to downstream image viewers and decoders for decoding and/or rendering. In an example embodiment, an image format supports TMOs that can be deployed by various editing software applications and/or camera manufacturers. Example embodiments can work with a variety of image formats including, for example, standard JPEG image formats and extended or enhanced JPEG-related formats such as JPEG-HDR. Additionally, alternatively or optionally, an example embodiment may use an image format that is based on or used with a codec/standard that varies in one or more substantial aspects, attributes, objects, encoding specifications or performance parameters from those that can be used with a JPEG-related image format. An example embodiment uses a JPEG-HDR image format to support storage of a base TM image with luminance ratios and color residuals. Additionally, optionally or alternatively, either or both of the base TM image and the residual image stored in an image file can be compressed. In an example embodiment, image data compression is performed according to the JPEG standard. Additionally, alternatively, or optionally, an example embodiment may perform compression according to a standard that varies in one or more substantial aspects, attributes, objects, encoding specifications, or performance parameters from those that can be used with a JPEG-related image format.
[0030] As the JPEG format is limited to LDR images, JPEG-HDR essentially comprises a backward-compatible HDR extension to the JPEG format. JPEG-HDR simultaneously supports HDR image rendering on new HDR display devices and non-HDR (e.g., LDR) image rendering on HDR or non-HDR display devices. JPEG-HDR stores the tone-mapped image in standard locations (e.g., in a bitstream, in a disk format, etc.) as defined in JPEG and stores additional metadata in new locations that can be ignored by non-HDR display devices. The additional metadata can be used along with the tone-mapped image to generate/restore an HDR version of an original HDR image.
[0031] In one embodiment, an HDR JPEG encoder is implemented with or disposed within an integrated circuit (IC) device. In one embodiment, devices, circuits and/or mechanisms as described herein comprise a component in a camera or other image recording, rendering or display system, a cellular radiotelephone, a personal digital assistant (PDA), or a personal, portable, or consumer electronic device (e.g., for pictures, computing, movies, music, information, entertainment, calculation, voice).
[0032] An embodiment can perform one or more functions as described in Patent Application No. PCT/US2012/033795, filed on April 16, 2012 in accordance with the Patent Cooperation Treaty (PCT) by Wenhui Jia, et al., entitled ENCODING, DECODING, AND REPRESENTING HIGH DYNAMIC RANGE IMAGES, or in the specification document "JPEG-HDR Encoder and Decoder Algorithm Specification" by Dolby Laboratories, which is incorporated herein for all purposes, a copy of which is attached to this specification (as filed) as Annex "A".
[0033] An embodiment can perform one or more functions as described in Patent Application No. PCT/US2012/027267, filed on March 1, 2012, in accordance with the PCT by Gregory John Ward, entitled LOCAL MULTI-SCALE TONE MAPPING OPERATOR, which is incorporated herein for all purposes by way of reference in its entirety.
[0034] Various modifications to the preferred embodiments and to the generic principles and features described in this document will be readily apparent to those skilled in the art. Thus, the invention is not intended to be limited to the embodiments shown, but is to be accorded the broadest scope consistent with the principles and features described herein. EXAMPLE HDR JPEG ENCODER
[0035] In one embodiment, an HDR JPEG encoder is implemented with an integrated circuit (IC) device, which is commonly referred to as a chip. For example, the encoder can be arranged inside the IC device. The IC device can be implemented as an application-specific IC (ASIC), a digital signal processor (DSP), a field-programmable gate array (FPGA), and/or a graphics processor. The IC device can be implemented as a system on a chip (SOC) with an ASIC or with one or more configurable or programmable devices such as a microprocessor, a programmable logic device (PLD), a field-programmable gate array (FPGA) or a microcontroller.
[0036] The IC device comprises an array of active device components, such as transistors, which are arranged within a semiconductor die. The active device components are ordered, arranged, configured and/or programmed to function as modules, registers, storage buffers, logic gates, logic and computation units (e.g., arithmetic/floating point), or to perform other such operations consistent with JPEG HDR encoding. The active components of the array are interconnected with an at least partially conductive routing fabric, such as a signal/bus network, an address lattice/word lines or the like, which is arranged within the die to allow electrical/electronic exchange of signals and data between the active device components and the various functional modules that are formed with them. The active components are operationally addressable (for example, through nodes or portions of the routing fabric) through an interface that is at least partially conductive, which allows electrical, electronic and/or communicative coupling with power, data and signal sources external to the IC device.
[0037] An example HDR JPEG encoder embodiment is described herein as implemented with an ASIC. For clarity, simplicity, brevity, and consistency, the example ASIC implementation described in this document also represents configurable and programmable IC implementations. FIG. 1A depicts an example HDR JPEG encoder in accordance with an embodiment of the present invention.
[0038] An example encoder 10 is implemented with an ASIC. Encoder 10 receives input images via an Advanced High-performance Bus (AHB) interface. Pre_TM pre-tone mapping converts input image data into a format that is useful for tone mapping. Pre_TM performs chroma upsampling, for example, from a 4:2:2 chroma sampling format to a 4:4:4 format. Pre_TM converts the input image color space (e.g., YCbCr) to a tristimulus color space, such as RGB. Pre_TM performs a reverse (inverse) gamma correction on the image converted to RGB.
[0039] The encoder 10 performs a tone mapping function, which generates a tone-mapped base image from an input HDR image. Encoder 10 can be deployed to handle HDR images that are input in a variety of formats, such as the example input formats shown in Table 1, below. TABLE 1

[0040] The tone mapping function comprises a histogram-adjusted multi-scale tone mapping operator (HAMS-TMO), which uses contrast-limited adaptive histogram equalization (CLAHE) to perform a tone map normalization function on the input HDR images. The normalization function can be implemented with CLAHE histogram tone map normalization (HCTN) on input images. The HAMS-TMO HCTN outputs a normalized tone-mapped base image in a 12-bit linear RGB format. Example HAMS-TMO HCTN embodiments are described below (FIG. 2 and FIG. 3). An RI_Proc ratio image processor can compute and process one or more ratio images from the normalized tone-mapped base image.
[0041] Following the HCTN processing of the HAMS-TMO, Post_TM post-tone mapping restores gamma correction on the normalized 12-bit RGB image and generates an 8-bit RGB image from it. Post_TM is responsible for sending the tone-mapped base image to the JPEG encoder for compression. Post_TM converts the RGB color space of the gamma-corrected 8-bit image to a YCbCr image with a JPEG-compatible color format (for example, 4:2:2 or 4:2:0). For example, Post_TM can comprise the following operations: gamma encoding (where a 12-bit RGB input is translated into an 8-bit output, typically via a user-defined lookup table), RGB to YCbCr color transform (e.g., through a 3x3 color matrix transformation) and 4:4:4 to 4:2:2 or 4:2:0 transformation through appropriate subsampling of the chroma color planes. Encoder 10 may comprise more than one Post_TM post-tone mapping sub-block. For example, encoder 10 can be implemented with three (3) Post_TM sub-blocks.
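The Post_TM chain described above (gamma encoding through a lookup table, a 3x3 RGB-to-YCbCr matrix, and chroma subsampling) can be illustrated with a rough software sketch. The BT.601 matrix, the 2x2 averaging used for 4:2:0, and the function and parameter names below are illustrative assumptions, not the encoder's actual tables or interfaces.

import numpy as np

def post_tone_map(rgb12, gamma_lut):
    """Sketch of Post_TM: 12-bit linear RGB -> 8-bit gamma RGB -> YCbCr 4:2:0.

    rgb12:     HxWx3 integer array of 12-bit linear RGB values (0..4095).
    gamma_lut: 4096-entry lookup table mapping 12-bit linear to 8-bit gamma.
    Assumes even image dimensions.
    """
    # Gamma encoding through a user-defined LUT (12-bit in, 8-bit out).
    rgb8 = gamma_lut[rgb12]

    # RGB -> YCbCr via a 3x3 color matrix (BT.601 full range, assumed here).
    m = np.array([[ 0.299,     0.587,     0.114],
                  [-0.168736, -0.331264,  0.5],
                  [ 0.5,      -0.418688, -0.081312]])
    ycbcr = rgb8.astype(np.float64) @ m.T
    ycbcr[..., 1:] += 128.0                      # center the chroma channels
    ycbcr = np.clip(np.round(ycbcr), 0, 255).astype(np.uint8)

    # 4:4:4 -> 4:2:0 by averaging each 2x2 block of the chroma planes.
    y = ycbcr[..., 0]
    h, w = y.shape
    cb = ycbcr[..., 1].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
    cr = ycbcr[..., 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
    return y, cb.astype(np.uint8), cr.astype(np.uint8)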
[0042] FIG. 1B depicts an example image coding method 100, in accordance with an embodiment of the present invention. In one embodiment, encoder 10 functions as described with respect to process 100 upon receiving or accessing an input HDR image. In step 101, a histogram is computed based on the logarithmic (log) luminance values of the pixels in the input HDR image. A tone map curve is generated in step 102, based on the computed histogram. In step 103, a logarithmic global tone-mapped luminance image is computed based on the logarithmic luminance pixel values of the input HDR image and the tone map curve.
[0043] In step 104, the log global tone-mapped luminance image is downsampled (e.g., decimated vertically and horizontally) to compute a reduced-scale log global tone-mapped luminance image. A log-ratio image is computed in step 105 based on the reduced-scale log global tone-mapped luminance image and the log luminance pixel values of the input HDR image. In step 106, multiple scale filtering is performed on the log ratio image to generate a log multiple scale ratio image. A second log tone-mapped image is generated in step 107 based on the ratio image at multiple log scales and the log luminance pixel values of the input HDR image.
[0044] In step 108, the second log tone-mapped image is normalized to change the range of pixel intensity values and achieve contrast stretching, and an output tone-mapped image is generated based on it and the reduced-scale log global tone-mapped luminance image. A second ratio image is generated in step 109 based on the output tone-mapped image and the input HDR image. In step 110, the second ratio image is quantized. In step 111, the output tone-mapped image and the quantized second ratio image are output to a JPEG encoder. At each step of example process 100, the generated global tone-mapped images and ratio images can be written to and/or read from an external memory, for example, via the interfaces of example encoder 10.
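A minimal software sketch of steps 101 through 110, for the luminance channel only, is given below. The histogram size, the 2x downsampling factor, the contrast-stretch normalization, and the helper multi_scale_filter are placeholders standing in for the hardware blocks described elsewhere in this specification; the ratio in step 105 is computed here at full resolution for simplicity, whereas the text above uses the reduced-scale image.

import numpy as np

def encode_hdr_luma(luma, multi_scale_filter, bins=512):
    """Rough sketch of steps 101-110 for the luminance channel only.

    luma: HxW array of linear HDR luminance values (> 0).
    multi_scale_filter: callable standing in for the MSF block (see FIG. 4).
    """
    log_y = np.log2(luma)                                    # input to step 101
    hist, edges = np.histogram(log_y, bins=bins)             # step 101
    # Step 102: tone map curve from the (equalized) cumulative histogram.
    c_hist = np.cumsum(hist).astype(np.float64)
    curve = (c_hist - c_hist.min()) / (c_hist.max() - c_hist.min())
    # Step 103: global log tone-mapped luminance image.
    idx = np.clip(np.digitize(log_y, edges) - 1, 0, bins - 1)
    log_ytm = curve[idx] * (log_y.max() - log_y.min()) + log_y.min()
    # Step 104: downsample the global tone-mapped image (factor assumed 2x2).
    log_ytm_small = log_ytm[::2, ::2]
    # Step 105: log ratio image (full resolution here, for simplicity).
    log_ri = log_ytm - log_y
    # Step 106: multi-scale filtering of the log ratio image.
    log_ri_ms = multi_scale_filter(log_ri)
    # Step 107: second log tone-mapped image.
    log_tm2 = log_y + log_ri_ms
    # Step 108: normalize (contrast stretch) to produce the output tone map.
    tm = (log_tm2 - log_tm2.min()) / (log_tm2.max() - log_tm2.min())
    # Steps 109-110: second ratio image and 8-bit quantization.
    log_r2 = log_y - np.log2(np.maximum(tm, 1e-6))
    q = np.round(255 * (log_r2 - log_r2.min()) / (log_r2.max() - log_r2.min()))
    return tm, q.astype(np.uint8), log_ytm_small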
[0045] FIG. 2 depicts an example histogram-adjusted multiscale tone mapper, in accordance with an embodiment of the present invention. In one embodiment, histogram-adjusted multiscale tone mapper 200 implements the HCTN function of the HAMS-TMO, described above (FIG. 1). HAMS-TMO 200 receives an HDR image in a tristimulus (e.g., RGB) or other color space (e.g., YCbCr). The luminance module (201) computes 16-bit Y luminance values on the input HDR RGB image. A logarithmic luminance module LOG (202) transforms the luminance values Y from the linear domain into a logarithmic domain. The LOG 202 module implements the transformation of the luminance values Y into base-2 logarithms, 'logY'.
[0046] In transforming the 16-bit linear luminance values, the LOG module saves the resulting base-2 logarithm (log2) values logY as Q4.12 data (for example, 4 bits before the imaginary binary point and 12 bits after it). The logarithms comprise integer and fractional components. Thus, one embodiment separates the integer logY component and the fractional logY component to implement the base-2 logarithm log2Y. The integer portion is computed according to the number of left shifts in normalization and the top 8 fractional bits index a lookup table (LUT), for example, as shown in the example pseudocode of Table 2, below. TABLE 2


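A possible rendering of the fixed-point base-2 logarithm of paragraph [0046] is sketched below. The 256-entry LUT construction and the Q4.12 packing shown here are assumptions standing in for the pseudocode of Table 2.

import math

# Hypothetical 256-entry LUT: log2(1 + f/256) scaled to 12 fractional bits.
LOG2_FRAC_LUT = [round(math.log2(1.0 + i / 256.0) * 4096) for i in range(256)]

def log2_q412(y):
    """Approximate log2 of a 16-bit luminance value as Q4.12 fixed point."""
    assert 0 < y < (1 << 16)
    # Integer part: position of the highest set bit (number of normalizing shifts).
    int_part = y.bit_length() - 1
    # Fractional part: top 8 bits of the normalized mantissa index the LUT.
    frac_index = ((y << 8) >> int_part) & 0xFF
    return (int_part << 12) | LOG2_FRAC_LUT[frac_index]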
[0047] A HIST histogram (203) comprising 512 bins is constructed from the fractional logY component. The fractional log luminance values are treated as 16-bit integer values. Thus, the interval between bins comprises 65,536/512 = 128. HAMS-TMO 200 then performs CLAHE adjustments on the histogram. The dynamic range is computed from the histogram, for example, according to the example pseudocode shown in Table 3A, below. TABLE 3A


[0048] The output dynamic range (ODR) is configurable with a default value of 3.5 in the natural logarithmic domain (base e), which translates to a base-2 value of five (5). A histogram clipping factor 'cf' is computed, for example, by: cf = ((odr*(bmax-bmin+1))<<12) / (drin); and the histogram can be adjusted with it over multiple iterations, for example, in accordance with the pseudocode shown in Table 3B, below. TABLE 3B

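The contrast-limited clipping of paragraph [0048] might be realized roughly as follows; the clip-and-redistribute loop is a generic CLAHE-style adjustment with assumed variable names, not the fixed-point pseudocode of Tables 3A and 3B.

import numpy as np

def clip_histogram(hist, clip_factor, iterations=4):
    """Clip histogram bins at a ceiling and redistribute the excess counts.

    hist: 1-D array of bin counts (e.g., 512 bins of logY values).
    clip_factor: ceiling expressed as a multiple of the mean bin count.
    """
    hist = hist.astype(np.float64).copy()
    ceiling = clip_factor * hist.mean()
    for _ in range(iterations):
        excess = np.sum(np.maximum(hist - ceiling, 0.0))
        if excess <= 0:
            break
        hist = np.minimum(hist, ceiling)
        hist += excess / hist.size          # spread the clipped counts evenly
    return hist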
[0049] A cumulative histogram is computed from the fitted histogram and mapped to a 12-bit log domain in a Q4.12 data format, for example, according to the pseudocode shown in Table 3C, below. TABLE 3C

[0050] Such CLAHE histogram equalization generates a mapping curve, which is deployed as a global tone mapping operator for the logY image. As the mapping curve is defined over the 512 bins, linear interpolation is computed on the luminance values within each of the 512 bins, for example, in accordance with the pseudocode shown in Table 3D, below. TABLE 3D


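The global mapping of paragraph [0050], a tone curve defined at 512 bins and applied to each logY value with linear interpolation, might look roughly like the sketch below; the bin edges and the Q4.12 scaling are simplified away.

import numpy as np

def apply_tone_curve(log_y, curve, y_min, y_max):
    """Map logY values through a 512-entry tone curve with linear interpolation.

    log_y: array of log-luminance values.
    curve: 512-entry array, the CLAHE-derived mapping for each histogram bin.
    y_min, y_max: range over which the histogram/curve was built.
    """
    bins = len(curve)
    # Fractional bin position of every pixel within [y_min, y_max].
    pos = (log_y - y_min) / (y_max - y_min) * (bins - 1)
    pos = np.clip(pos, 0, bins - 1)
    lo = np.floor(pos).astype(int)
    hi = np.minimum(lo + 1, bins - 1)
    frac = pos - lo
    # Linear interpolation between the two neighboring curve entries.
    return curve[lo] * (1.0 - frac) + curve[hi] * frac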
[0051] The CLAHE mapping output comprises a logY image (204) in Q4.12 format. In one embodiment, HAMS-TMO 200 is deployed with a block that performs a histogram CLAHE (contrast-limited adaptive histogram equalization) tone map normalization function.
[0052] FIG. 3A and FIG. 3B, respectively, depict an example histogram CLAHE tone map normalization (HCTN) block 30 and a corresponding example HCTN process flow 300, in accordance with an embodiment of the present invention. The HCTN block 30 can be deployed to support images of 25 million pixels or more. Upon receipt of an input image in a tristimulus (e.g., RGB) or other color space (e.g., YCbCr), HCTN 30 computes a luminance value Y from it (process step 301). In step 302, the Y values are exported to the shared logic for computing the corresponding 'logY' logarithm values, which are returned to the HCTN block 30. In step 303, a histogram is computed based on the logY values and stored in the 'ht0' table.
[0053] Upon counting all input image pixels, contrast-limited adaptive histogram equalization (CLAHE) is computed to normalize the ht0 histogram values in step 304. In step 305, the buffered logY values are interpolated and a logarithmic tone-mapped image 'logYtm' is thus generated. For example, the color mapping curve is defined over 512 bins of the histogram. Thus, linear interpolation is computed on the luminance values within each of the 512 bins to obtain logYtm. In step 306, a logarithmic ratio image 'logRI' is computed from the logY values and the logYtm image with a subtractive function that is performed in the logarithmic domain: logRI = logYtm - logY. In step 307, the logY histogram is then clipped. In step 308, after filtering at multiple scales, the tone-mapped logY values are normalized to linear luminance values Y'. In step 309, an optional curve function can be applied over the linear tone-mapped Y' values to output a final tone-mapped image.
[0054] FIG. 4A, FIG. 4B and FIG. 4C, respectively, depict an example multi-scale filter (MSF) block 4000, a corresponding example multi-scale filtering implementation, and an example process 400, in accordance with an embodiment of the present invention. Like the HCTN block 30 (FIG. 3A), the MSF can be deployed to support images of 25 million pixels or more. The MSF 4000 decimates an input log ratio image by a pre-computed factor (e.g., 8) over its horizontal dimension and over its vertical dimension. The MSF 4000 low-pass filters each pixel comprising the decimated image over multiple, for example seven (7), taps. The low-pass downscaled image may subsequently be scaled up, for example, by the same computed factor with which it was previously scaled down.
[0055] The MSF 4000 pre-computes a number "msn" of stages over which the image is scaled, based on its original input size, for example, according to the example implementation equation:
[0056] msn = floor(log8(min(width, height))) + 1 = floor(log2(min(width, height))/3) + 1. The MSF 4000 can be deployed to decimate an input log ratio image by a factor of up to eight (8) over its horizontal dimension and over its vertical dimension at each of the four (4) stages, for a total of 64 in each dimension.
[0057] Thus, in one embodiment, as depicted in FIG. 4B, a filtering implementation comprises four (4) stages 40, 41, 42 and 43. Each of stages 40 to 43 decimates the image in a vertical dimension and a horizontal dimension by a factor of eight, so that the size of the image is scaled down by a factor of 8 × 8 = 64 at each stage, and the MSF 4000 thus decimates the image by a total factor of 64. Thus, at each stage, the log ratio image is scaled down by a factor of eight in each dimension. This downscaling by a factor of eight is repeated at each of the msn levels (e.g., stages), for example, according to the pseudocode shown in Table 4, below. TABLE 4

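A sketch of the stage-count computation and the repeated divide-by-eight decimation of paragraphs [0056] and [0057] follows; the 8x8 block average used for decimation is a stand-in for whatever low-pass filter Table 4's pseudocode specifies.

import math
import numpy as np

def num_stages(width, height):
    """msn = floor(log8(min(width, height))) + 1, as in paragraph [0056]."""
    return int(math.floor(math.log2(min(width, height)) / 3.0)) + 1

def decimate_by_8(img):
    """Scale an image down by 8 in each dimension (simple 8x8 block average)."""
    h, w = img.shape
    h8, w8 = (h // 8) * 8, (w // 8) * 8           # crop to a multiple of 8
    blocks = img[:h8, :w8].reshape(h8 // 8, 8, w8 // 8, 8)
    return blocks.mean(axis=(1, 3))

def build_pyramid(log_ratio):
    """Repeated downscaling of the log ratio image over msn levels."""
    h, w = log_ratio.shape
    levels = [log_ratio]
    for _ in range(num_stages(w, h) - 1):
        levels.append(decimate_by_8(levels[-1]))
    return levels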
[0058] At each stage, a 7-tap low-pass filtering can be performed on each pixel of the decimated images. An embodiment is implemented in which the decimated images are each filtered in a horizontal dimension that corresponds to a first spatial orientation and then filtered in the vertical direction, which is spatially perpendicular to the first orientation. The variously scaled images are aligned at their boundaries, for example, with padding such as mirror extension.
[0059] An amplifier applies an "Alpha" weighting factor to the ratio image at each stage. For each of the stages "k", where k comprises an integer in the range from zero to msn minus one (k = 0, 1, ..., msn-1), an embodiment computes the weighting factor Alpha (A) according to: Ak = 2*(k+1)/(msn*(msn+1)). The weights sum to unity. An embodiment can be implemented in which the weighting factor is computed as 2*(msn-(k-1)+1)/(msn*(msn+1)) or as 1/msn.
[0060] Upscaling is performed (msn-1) times on the downscaled, filtered ratio images. The weighted log ratio image is added to the upscaled image at each stage. One embodiment implements the upscaling with interpolation (e.g., bilinear interpolation) of the lower resolution image from the previous stage, for example, using four (4) points at the spatial corners of the image and interpolating over its horizontal and vertical dimensions to build an upsampled block.
[0061] Stage 401 downscales and filters the input R0 image and passes the first ratio image R1 to stage 402. Similarly, stage 402 and each of stages 403 to 407, inclusive, pass downscaled, low-pass-filtered ratio images, each ordinally subsequent to the ratio image passed to it by its respective previous stage, to its respective next stage. The weighted ratio image from each stage is added to the upscaled image from the next stage.
[0062] The MSF 4000 generates tone-mapped luma, luminance or other intensity-related tone-mapped values that are recorded with configuration registers in external memory through a register interface.
[0063] In one embodiment, the MSF 4000 and/or implementation 400 operates in accordance with one or more steps of example multi-scale resolution filtering process 400. Example process 400 is described below with reference to FIG. 4B and the flowchart depicted in FIG. 4C. Process 400 begins processing a log ratio image R0 (e.g., generated in step 105 in FIG. 1B) by progressively downscaling the image over each of levels 41, 42, and 43. At each downscaling level, the image is progressively decimated in a vertical direction and in a horizontal direction.
[0064] In step 401, the log ratio image R0 is scaled down vertically and horizontally by a factor of "N", where N comprises a positive integer, for example eight (8). A first-level reduced-scale log ratio image R1 is thus generated. The first-level reduced-scale log ratio image R1 is then decimated by the factor N in step 402 to generate a second-level reduced-scale log ratio image R2. The second-level reduced-scale log ratio image R2 is then decimated by the factor N in step 403 to generate a third-level reduced-scale log ratio image R3. In an example embodiment, the reduced-scale image output of each level is low-pass filtered. In an example embodiment, not all levels need to be used.
[0065] In step 404, the pixel values of the third-level reduced-scale log ratio image R3 are scaled with third-level scale factors (e.g., Alpha[3]) to generate a third-level scale-weighted ratio image R'3. In step 405, the pixel values of the second-level reduced-scale log ratio image R2 are scaled with second-level scale factors (e.g., Alpha[2]) to generate a second-level scale-weighted ratio image R'2. In step 406, the third-level scale-weighted ratio image R'3 is scaled up by the factor N and is summed with the second-level scale-weighted ratio image R'2 to generate the second-level scaled-up log ratio image.
[0066] In step 407, the first-level reduced-scale log ratio image R1 is scaled with first-level scale factors (e.g., Alpha[1]) to generate a first-level scale-weighted ratio image R'1. In step 408, the second-level scaled-up log ratio image is scaled up by the factor N and is summed with the first-level scale-weighted ratio image R'1 to generate the first-level scaled-up log ratio image. In step 409, the log ratio image R0 is scaled with zero-level scaling factors (for example, Alpha[0]) to generate a zero-level scale-weighted ratio image R'0. In step 410, the first-level scaled-up log ratio image is scaled up by the factor N and is summed with the zero-level scale-weighted ratio image R'0 to generate a multi-scale log ratio image. Some steps of example process 400 may be optional.
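Putting steps 401 through 410 together, one possible software rendering of the multi-scale filter is sketched below. It reuses the hypothetical build_pyramid helper from the previous sketch, replaces the bilinear interpolator with crude pixel repetition, and computes the Alpha weights with the formula from paragraph [0059].

import numpy as np

def upscale(img, shape):
    """Upscale by pixel repetition and crop/pad to the target shape
    (a crude stand-in for the hardware bilinear interpolator)."""
    up = np.repeat(np.repeat(img, 8, axis=0), 8, axis=1)
    out = np.zeros(shape, dtype=img.dtype)
    h = min(shape[0], up.shape[0])
    w = min(shape[1], up.shape[1])
    out[:h, :w] = up[:h, :w]
    return out

def multi_scale_filter(log_ratio):
    """Weight each pyramid level and accumulate it back up to full resolution."""
    levels = build_pyramid(log_ratio)               # R0, R1, ..., R(msn-1)
    msn = len(levels)
    alpha = [2.0 * (k + 1) / (msn * (msn + 1)) for k in range(msn)]  # sums to 1
    acc = alpha[msn - 1] * levels[msn - 1]          # start at the coarsest level
    for k in range(msn - 2, -1, -1):                # ... and work back to level 0
        acc = alpha[k] * levels[k] + upscale(acc, levels[k].shape)
    return acc                                      # multi-scale log ratio image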
[0067] FIG. 5 depicts an example ratio image processor 500, in accordance with an embodiment of the present invention. One embodiment implements RI_Proc (FIG. 1) with ratio image processor 500. Ratio image processor 500 receives input images from TMO 200 (FIG. 2). A luminance ratio is computed from the Y luminance values of the original HDR input image and the luminance values of the tone-mapped image. Minimum and maximum values are computed over the entire image, which are used to quantize the log luminance logY values and the CbCr chrominance DiffCbCr values of the difference image.
[0068] The logY and DiffCbCr values are saved/written to external memory, for example, through an Advanced Microcontroller Bus Architecture Advanced Extensible Interface (AXI) or a similarly capable interface. The externally saved/stored values are read/loaded back via AXI when they are quantized. A linear feedback shift register (LFSR) generates random numeric values for dithering on the logY channel during quantization. RI_Proc 500 outputs the quantized DiffCbCr and logY values to a JPEG encoder, which can output a JPEG-format image corresponding to the input image.
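The quantization performed by RI_Proc can be approximated as in the sketch below; the dither uses a generic pseudo-random source in place of the hardware LFSR, and the 8-bit range mapping is an assumption consistent with the per-image minimum and maximum values described above.

import numpy as np

def quantize_channel(values, rng=None):
    """Quantize a floating-point channel (e.g., logY or DiffCbCr) to 8 bits.

    Returns the quantized bytes plus the (min, max) needed for dequantization.
    """
    vmin, vmax = float(values.min()), float(values.max())
    scaled = (values - vmin) / (vmax - vmin) * 255.0
    if rng is not None:
        scaled += rng.uniform(-0.5, 0.5, size=values.shape)   # dither (LFSR stand-in)
    return np.clip(np.round(scaled), 0, 255).astype(np.uint8), (vmin, vmax)

def dequantize_channel(q, vmin, vmax):
    """Inverse of quantize_channel (ignoring the irreversible rounding/dither)."""
    return q.astype(np.float64) / 255.0 * (vmax - vmin) + vmin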
[0069] FIG. 6A and FIG. 6B, respectively, depict an example encoding process 60 and an example data stream timeline 600 thereof, in accordance with an embodiment of the present invention. Upon receipt (61) of an HDR input image, a histogram and a reduced-scale LogY image LogY1 are generated in step 62. The histogram is normalized. In stream 600, the JPEG-HDR encoder core (e.g., encoder 100; FIG. 1) reads the entire HDR input image. The encoder generates the histogram based on the LogY values of the input image pixels, equalizes the histogram, and writes LogY1 to a scaled-down image buffer Buff_Log1. In one embodiment, the histogram is equalized using CLAHE.
[0070] In step 63, filtering at multiple scales is performed, which generates the actual per-pixel scaling factor for use in tone mapping. In step 64, the per-pixel scaling factor is applied to each pixel. The tone-mapped base image is converted to 8-bit gamma-encoded YCbCr 4:2:2/4:2:0/4:4:4 and can be sent to a JPEG encoder, which writes a compressed base image to external memory. The original and tone-mapped RGB data are processed to generate the raw pre-quantized ratio image, which is also written to external memory. In step 65, the raw ratio image is read back from external memory and quantized. The quantized ratio image can be output (66) to the JPEG encoder and compressed with it. EXAMPLE MULTIPLE-REGION-BASED WEIGHTED EXPOSURE FOR HDR IMAGES
[0071] A traditional, inexpensive consumer display device, such as a smartphone, computer monitor and the like, may not be able to display the full dynamic range of a JPEG-HDR image. In such cases, the display will typically output a tone-mapped low dynamic range (LDR) version of the corresponding HDR image. This tone-mapped image is typically generated automatically by the camera without any user input, so it may not capture the photographer's intent.
[0072] In some embodiments, a user can scroll through the HDR image using the device's user interface, such as a touch screen, computer mouse, scroll bars, and the like. In this case, the user may be able to see part of the image in the full dynamic range, but the rest of the image may be displayed too dark or too bright. However, a user may want to view details in multiple parts of the image. Thus, it could be beneficial to allow users to adjust the exposure of an HDR image based on regions of interest.
[0073] In one embodiment, the exposure of the final HDR image can take into account two or more regions of interest selected by the user. These regions can be selected either before image capture (e.g., with a camera or other capture device) or after image capture (e.g., while displaying the corresponding LDR image). In some embodiments with a touchscreen interface (for example, an iPhone or an iPad), these regions can represent pixels of relatively identical luminance that surround one or more pixels touched by the user. In other embodiments, a user can use alternative interfaces to select these regions, such as a computer mouse, trackball, keyboard, and the like. In still other embodiments, these regions can be automatically selected based on pre-selected user preferences (e.g., faces, animals, text, etc.).
[0074] In one embodiment, an area surrounding a first touch point can be set to an optimal first exposure range (e.g., 18% gray). Then, for a second touch point, a second optimal exposure range is computed. The final image can be displayed using a final exposure range weighted by the first and second exposure ranges. This places both the first and second touch points within the display's dynamic range while blending the rest of the resulting image. Any number of touch points can be identified, such as 3, 4 or N. The weighting factors can be an equal average, mean, median, proportional, linear, non-linear and/or extreme (maximum/minimum) weighting calculation. In a specific embodiment, the technique can be undone by a user command (e.g., an undo button).
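One way to realize the weighted multi-region exposure of paragraph [0074] is sketched below; the 18% mid-gray anchor, the equal-weight average, and the simple clipped scaling are illustrative choices, not a prescribed implementation.

import numpy as np

def region_exposure(hdr_luma, region_mask, mid_gray=0.18):
    """Exposure gain that maps a region's mean luminance to mid-gray."""
    return mid_gray / max(hdr_luma[region_mask].mean(), 1e-9)

def blended_exposure(hdr_luma, region_masks, weights=None):
    """Blend the per-region exposure gains (equal weights unless specified)."""
    gains = [region_exposure(hdr_luma, m) for m in region_masks]
    if weights is None:
        weights = [1.0 / len(gains)] * len(gains)
    gain = sum(w * g for w, g in zip(weights, gains))
    # Simple global tone curve: scale, then clip to the display range [0, 1].
    return np.clip(hdr_luma * gain, 0.0, 1.0)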
[0075] As depicted in FIG. 7, in another embodiment, a resulting image can be generated using an exposure fusion blending process. In this process, for each selected point of interest (710), the process generates a correspondingly exposed LDR image (720). Given N such exposures (or LDR images) created from the original HDR image, an embodiment can create a merged image by properly blending all N exposures into a single output image (730). An example of such a fusion process can be implemented using techniques described in "Exposure Fusion," by T. Mertens, et al., 15th Pacific Conference on Computer Graphics and Applications (Pacific Graphics, 2007), pages 382-390, incorporated by reference in its entirety as if fully set forth herein. EXAMPLE ADAPTIVE RATIO IMAGE QUANTIZATION
[0076] Given an HDR luminance image (Yh) and its tone-mapped representation (Yt), as described above, a ratio image YR can be expressed as
YR = Yh / Yt
[0077] The dynamic range of the ratio image can be compressed by applying an invertible function to it, such as a logarithmic function or a square root function. Thus, in an embodiment where the log function is applied, the compressed ratio image is
log(YR) = log(Yh / Yt) = log(Yh) - log(Yt)
[0078] The log ratio image (log(YR)) can also be further quantized to render an 8-bit ratio image, for example as
YR8 = round(255 * (log(YR) - min(log(YR))) / (max(log(YR)) - min(log(YR))))
[0079] Since the original ratio image comprises pixel values represented in high precision or high dynamic range (for example, with the use of floating point numbers), quantizing the ratio image to 8-bit pixel values will generate rounding errors which cannot be recovered when applying the inverse quantization function. This error can affect the accuracy of image encoding and may limit the dynamic range of images that can be encoded using the JPEG-HDR format.
[0080] In one embodiment, the above log function is thus replaced by an arbitrary invertible function "F". Given F, the quantized 8-bit ratio image can be expressed, for example, as
YR8 = round(255 * (F(YR) - min(F(YR))) / (max(F(YR)) - min(F(YR))))
[0081] This allows decoders to recover the original ratio image by, for example:
YR' = F^-1( YR8 * (max(F(YR)) - min(F(YR))) / 255 + min(F(YR)) )
[0082] where YR' denotes the recovered ratio image. In one embodiment, the minimum and maximum F(YR) values are included in the JPEG-HDR image data as metadata, which is accessible by the JPEG decoder.
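The F-based quantization and recovery of paragraphs [0080] through [0082] can be sketched as follows; F is passed in as a callable and the normalization to 8 bits is an assumption consistent with the min/max metadata described above.

import numpy as np

def quantize_with_f(yr, f):
    """Quantize a ratio image through an invertible mapping f (callable)."""
    fy = f(yr)
    fmin, fmax = float(fy.min()), float(fy.max())       # carried as metadata
    q = np.round(255.0 * (fy - fmin) / (fmax - fmin)).astype(np.uint8)
    return q, fmin, fmax

def recover_with_f_inverse(q, f_inverse, fmin, fmax):
    """Recover an approximate ratio image YR' from the 8-bit data."""
    fy = q.astype(np.float64) / 255.0 * (fmax - fmin) + fmin
    return f_inverse(fy)

# Example: the log/exp pair of paragraph [0077] as a particular choice of F.
# q, lo, hi = quantize_with_f(yr, np.log)
# yr_rec    = recover_with_f_inverse(q, np.exp, lo, hi)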
[0083] In one embodiment, the function F can be selected so that it minimizes M(YR', YR), where M denotes a metric that measures the difference between YR' and YR according to some quality criterion, such as the mean square error, signal-to-noise ratio (SNR) or peak signal-to-noise ratio (PSNR). M (for example, the MSE between the two images) represents an objective function for the optimization of F. F can be a parametric function or it can be defined by means of a lookup table (LUT). Given M, well-known optimization techniques can be applied to determine F, such as the Nelder-Mead method described in "A simplex method for function minimization," by J. A. Nelder and R. Mead, Computer Journal, No. 7, pages 308-313, 1965.
[0084] In one embodiment, the JPEG-HDR header may include a decoding LUT representing the inverse encoding function F^-1. A compatible JPEG-HDR decoder can use the LUT to convert the received ratio image from 8-bit data to higher precision Y-channel data (e.g., floating point). The LUT can have 256 entries that map 8-bit data directly to floating point values. EXAMPLE HISTOGRAM EQUALIZATION BASED METHODS
[0085] An embodiment addresses computational efficiency, in which histogram equalization or contrast-limited histogram equalization also provides a process to derive the F function. The histogram equalization process converts a source luminance that has an arbitrary distribution into a luminance with a uniform histogram, so that the ratio image can be encoded more effectively. In an embodiment that uses histogram equalization, F can be computed as described below. a) Compute hist, the histogram of YR. The histogram simply denotes the number of instances (e.g., hist_i) at which the value i is found in the ratio image; b) Compute c_hist, the cumulative histogram of hist. For example, the cumulative histogram can be computed as: c_hist_i = sum over j <= i of hist_j;
c) Determine F by normalization and scaling of c_hist. For example: F_i = ((c_hist_i - min(c_hist)) / (max(c_hist) - min(c_hist))) * scale, where the variable scale determines the maximum value of F, for example, 255.
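Steps a) through c) above can be sketched directly, as shown below; the bin count and the small slope floor used to keep F invertible (see paragraph [0086]) are assumptions.

import numpy as np

def derive_f_from_histogram(yr, bins=256, scale=255.0, min_slope=1e-3):
    """Build an encoding curve F for the ratio image via histogram equalization."""
    hist, edges = np.histogram(yr, bins=bins)
    c_hist = np.cumsum(hist).astype(np.float64)
    f = (c_hist - c_hist.min()) / (c_hist.max() - c_hist.min()) * scale
    # Enforce a minimum slope so F is strictly increasing and F^-1 exists.
    f = np.maximum.accumulate(f + min_slope * np.arange(bins))
    f = (f - f.min()) / (f.max() - f.min()) * scale      # re-normalize to [0, scale]
    return edges, f          # F is defined piecewise over the histogram bins

def apply_f(yr, edges, f):
    """Evaluate F at each pixel by interpolating over the bin centers."""
    centers = 0.5 * (edges[:-1] + edges[1:])
    return np.interp(yr, centers, f)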
[0086] The encoding function F computed as above may have areas with zero derivative or infinite slope, so F may not provide a unique mapping and the inverse function F^-1 may not exist. Limiting the slope or derivative of F allows an embodiment to guarantee the uniqueness of the mapping provided by F and the existence of F^-1.
[0087] The histogram equalization approach makes coding accuracy depend on the frequency of occurrence of luminance values. Thus, less frequently occurring luminance values may be quantized with a larger error, while frequently occurring luminance values are quantized with a smaller error. EXAMPLE JPEG-HDR WIDE COLOR GAMUT SUPPORT
[0088] A typical single image file format might use ICC (International Color Consortium) or WCS (Windows Color Management System) profiles to communicate color information to the rendering device (e.g., a display). ICC profiles and WCS profiles require that an image be rendered to a specific color space. As part of rendering, all colors that are not representable in the target color space must be gamut-mapped to representable colors. As a result of this gamut mapping, some of the color information may be lost in the rendered image.
[0089] For example, an image can be captured by a sophisticated wide color gamut camera or it can be created using computer graphics (CG) software. The resulting image can then be rendered to the sRGB color space. The sRGB color space is the most common color space and is supported by most display device operating systems. However, as the sRGB color space has a relatively small color gamut, all colors in the image that are not covered by sRGB need to be mapped to sRGB colors. If an sRGB image is then sent to an imaging device with a much wider color gamut, there is no reliable way to recover the original wider-gamut colors that were gamut-mapped. Thus, gamut mapping can cause irreversible information loss and can result in suboptimal color reproduction.
[0090] Another aspect of image rendering is related to specifying viewing conditions. For example, home and office viewing conditions are typically different from the viewing conditions used in color matching or color grading environments. The ICC workflow specifies exact viewing conditions (VC) making the workflow inflexible. WCS allows for some VC flexibility, but once an image is rendered, it's virtually impossible to reverse the change.
[0091] Both gamut mapping and VC define a set of rendering decisions that a content creator should make based on assumptions about how an image will be viewed. In real life, it's impossible to make optimal rendering decisions for every possible use case and target imaging device as well as every possible purpose.
[0092] In one embodiment, the JPEG-HDR file format allows for two separate sets of metadata related to color gamut: one set related to the original capture device or HDR data, and the other set related to target legacy workflows that use color images. Thus, legacy imaging devices with standard color gamut and dynamic range can still show a standard rendered image based on conventional ICC and WCS workflows to deliver color-accurate image content. At the same time, devices that support higher dynamic range, wider gamut and/or point-in-time rendering may also be able to retrieve the original image data for dynamic rendering that takes into account both the viewing conditions and the properties of the device. For example, an application can retrieve the original scene data and render it based on the current viewing conditions and characteristics of the target display device. Thus, a base image can provide backward compatibility with existing color management workflows, while JPEG-HDR metadata allows for precise and flexible point-in-time rendering.
[0093] A JPEG-HDR image contains a base image (e.g., a baseline JPEG image) and HDR metadata (e.g., a ratio image and residual color data). The base image is a gamut-mapped and tone-mapped rendered image, typically rendered in the sRGB color space. The JPEG container can either indicate the color space of the base image or can include an ICC/WCS color profile that enables consistent color reproduction on a variety of imaging devices.
[0094] The HDR metadata may also include color space information, either in a device-independent space, such as XYZ primaries, or in an attached second ICC/WCS color profile. The HDR metadata color space can be different from the base image color space. The metadata color gamut is typically larger than the base image color gamut. For example, the metadata color space for cameras typically matches the color space of the camera sensors. For CG images, the metadata color space can include all colors present in the original image. Thus, an embodiment provides enhanced wide color gamut support in JPEG-HDR with the use of two or more color space descriptors, e.g., profiles. One profile defines the base image encoding color space and the second profile defines the HDR metadata encoding color space.
[0095] FIG. 8A depicts an encoding process that supports dual color spaces according to an example embodiment. As shown in FIG. 8A, the input HDR image 805, captured in color space B, can be tone-mapped by the TMO process 810 to generate a tone-mapped image 815 in color space B. Tone-mapped image 815 can then be processed by gamut transform 820 to generate a base image 825 in color space A. Using the information about the two color spaces, a TAB color transform can be created to transform images from color space A into color space B. The TAB transform can be applied to base image 825 in the color transform step 840 to create a base image 845 in color space B.
[0096] With the use of the original HDR image 805 and the base image 845, process 830 can generate the HDR metadata 835 according to the methods described earlier in this invention. Finally, image 825 (in color space A) and HDR metadata 835 (in color space B) can be encoded and combined to generate a JPEG-HDR image (855). The JPEG-HDR 855 image file format can include appropriate color descriptors for both color spaces. In some embodiments, processing steps 810 and 820 can be combined into a single step which, given an HDR image (805) in color space B, outputs a tone-mapped image in color space A (825). With the use of additive color spaces, such a per-matrix TRC (tone reproduction curve) approach allows combining steps 810 and 820 during encoding, as both tone mapping and gamut mapping can be done in the original color space (e.g., B). In addition, color transformations between color spaces become more accurate and computationally more efficient.
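For additive (matrix-based) RGB color spaces, the TAB transform of FIG. 8A reduces to a 3x3 matrix applied to linear pixel values; a minimal sketch under that assumption follows, with the RGB-to-XYZ matrices taken as given from the embedded color profiles.

import numpy as np

def color_transform(rgb, m_a_to_xyz, m_b_to_xyz):
    """Transform linear RGB pixels from color space A to color space B.

    m_a_to_xyz, m_b_to_xyz: 3x3 RGB->XYZ matrices derived from each space's
    primaries and white point (assumed known from the embedded profiles).
    """
    t_ab = np.linalg.inv(m_b_to_xyz) @ m_a_to_xyz     # A -> XYZ -> B
    return rgb @ t_ab.T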
[0097] FIG. 8B depicts a decoding process that supports dual color gamuts according to an example embodiment. As shown in FIG. 8B, given an input JPEG-HDR image, which defines data in two color spaces, a base image in color space A and the HDR metadata in color space B, a base decoder extracts base image 865 in color space A (i.e., sRGB). Image 865 can be used to display the base image on legacy display devices with standard dynamic range.
[0098] Using information about the two color spaces, a TAB color transform can be created to transform images from color space A to color space B. The TAB transform can be applied to base image 865 in the color transform step 870 to create a base image 875 in color space B. Given the input 855, the metadata decoding process 890 extracts the HDR metadata 895 in color space B. Finally, the HDR decoder 880 can combine base image 875 and metadata 895 to generate an HDR image 885 in color space B.
[0099] If the HDR metadata is in a wide color space that encompasses all possible colors of an image, the encoded image values will always be positive. Positive values allow for validation of images during the encoding and decoding stages. That is, if negative values are detected, these values can be reset and/or an error message can be issued. The methods described in this document can also be applied to encoding standard dynamic range (SDR) input images with a wider color gamut than conventional SDR images. For input SDR images (e.g., 805), the TMO processing step (810) can be omitted.
[00100] Image 885 can subsequently be rendered on the target imaging device for the specific, current viewing conditions. Standard displays, HDR displays, wide gamut displays, and printers are examples of target imaging devices. Brightly lit, neutrally colored, and dimly lit environments are all examples of different viewing conditions.
[00101] An exemplary embodiment of the present invention is thus described in relation to encoding HDR pictures. Log luminances in an input HDR image are histogrammed to generate a per-tone map, along with which a global log tone-mapped luminance image is computed. The global log tone-mapped luminance image is scaled down. The log luminances and the log global tone-mapped luminance image generate a log ratio image. Multi-scale resolution filtering of the log-ratio image generates a log-ratio image of multiple scales. The multiple scale ratio image log and log luminances generate a second log tone-mapped image, which is normalized to output a tone-mapped image based on the scaled-down log global tone-mapped luminance image and the normalized image. The input HDR image and the output tone-mapped image generate a second image in aspect ratio, which is quantized. The quantized base image and the base aspect ratio image can be output, for example, to a JPEG encoder for compression in the JPEG format. SAMPLE JPEG-HDR ENCODING WITH MULTI-SCALE RATIO IMAGE FORMATION
[00102] In one embodiment, the additional image metadata comprises a multi-scale (local) grayscale ratio image, which is derived from the original HDR image. One embodiment uses a color gamut, such as an extended YCC gamut, with the image format described in this document to enable full recovery of every pixel in the HDR version of the original HDR image, as generated/restored from the tone-mapped image and the multi-scale local grayscale ratio image. In one embodiment, techniques as described in this document keep the number of completely black tone-mapped values in the tone-mapped image below a threshold (e.g., 0.01%, 0.1%, 1%, 2%, etc. of the total number of pixels in the tone-mapped image) to enable full recovery of every pixel in the HDR version of the original HDR image.
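One simple way to read the black-pixel constraint is as a post-check on the tone-mapped image, as in the sketch below; the 0.01% default threshold is the first value quoted above, while the 1/4096 floor used to lift crushed blacks is an assumption rather than the embodiment's remedy.

```python
import numpy as np

def limit_black_pixels(tm, threshold=0.0001, floor=1.0 / 4096):
    """Keep the fraction of completely black tone-mapped pixels below `threshold`."""
    black_fraction = float(np.mean(tm <= 0.0))
    if black_fraction > threshold:
        # Lift crushed blacks so every HDR pixel stays recoverable from the ratio image.
        tm = np.maximum(tm, floor)
    return tm, black_fraction
```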
[00103] Under the techniques of this document, instead of using a global tone mapping (TM) operator, which compresses global contrast to fit the desired output range of luminance values and loses local contrast that matters to human visual perception, tone mapping processing at multiple local scales can be used to generate the tone-mapped image; this improves local contrast that might be compromised by a global TM operator, while leaving the overall mapping as it is. In one embodiment, TM processing at multiple local scales uses a global curve (for example, a histogram-fitting TM curve) to map luminance values without loss of detail. In one embodiment, TM processing at multiple local scales is performed effectively without generating/introducing new artifacts (such as halos) in the process. In a particular embodiment, efficient recursive processing is deployed to perform the processing at multiple local scales described in this document with high computational efficiency; in one such embodiment, processing at multiple local scales takes only 30% more time than TM processing by a global TM operator.
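One plausible shape for such recursive multi-scale processing is a small pyramid in the log domain: repeatedly downsample the log ratio image, weight each level, then upsample and accumulate. In the sketch below the three-level depth, the box downsampling, the nearest-neighbour upsampling, the uniform weights, and the factor of eight per level are all assumptions chosen only to make the recursion concrete.

```python
import numpy as np

def downscale(img, n):
    """Box-average downscale by an integer factor n in both dimensions."""
    h, w = img.shape
    return img.reshape(h // n, n, w // n, n).mean(axis=(1, 3))

def upscale(img, n):
    """Nearest-neighbour upscale by an integer factor n."""
    return np.repeat(np.repeat(img, n, axis=0), n, axis=1)

def multiscale_log_ratio(log_ratio, n=8, weights=(0.25, 0.25, 0.25, 0.25)):
    """Merge the log ratio image across several local scales (illustrative only)."""
    h, w = log_ratio.shape
    assert h % n**3 == 0 and w % n**3 == 0, "sketch assumes dims divisible by n**3"

    levels = [log_ratio]
    for _ in range(3):                 # zero-, first-, second-, third-level images
        levels.append(downscale(levels[-1], n))

    acc = weights[3] * levels[3]       # start from the coarsest (third) level
    for lvl in (2, 1, 0):              # upscale, then add the weighted finer level
        acc = upscale(acc, n) + weights[lvl] * levels[lvl]
    return acc
```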
[00105] In one embodiment, an input HDR image is loaded and its luminance values are converted into the logarithmic domain. A histogram-fitting TM curve is computed and applied to the luminance values to determine a global grayscale ratio image. As used herein, a ratio image generally refers to an image comprising ratio values between luminance values in a pre-tone-mapping image (e.g., an input HDR image or its logarithmic equivalent) and luminance values in a post-tone-mapping image (e.g., a tone-mapped image or its logarithmic equivalent). In one embodiment, the ratio image is logically represented as the pre-tone-mapping image divided by the post-tone-mapping image at each pixel location in the non-logarithmic domain, or equivalently as the pre-tone-mapping image minus the post-tone-mapping image at each pixel location in the logarithmic domain. In some other embodiments, the ratio image is logically represented as the post-tone-mapping image divided by the pre-tone-mapping image at each pixel location in the non-logarithmic domain, or equivalently as the post-tone-mapping image minus the pre-tone-mapping image at each pixel location in the logarithmic domain. In all of these embodiments, it should be noted that the post-tone-mapping image (e.g., a TM image at multiple local scales) can be obtained through simple algebraic operations (e.g., multiplications/divisions in the non-logarithmic domain; additions/subtractions in the logarithmic domain) if the ratio image (e.g., a ratio image at multiple local scales) and the pre-tone-mapping image (e.g., an input HDR image) are known.
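These per-pixel relationships amount to simple algebra; the sketch below shows both representations and the reconstruction they permit, with a small epsilon added as an implementation assumption to guard against division by zero and log of zero.

```python
import numpy as np

EPS = 1e-6

def ratio_images(pre_tm, post_tm):
    """Ratio image in the linear and logarithmic domains (pre-over-post convention)."""
    ratio_linear = (pre_tm + EPS) / (post_tm + EPS)             # division, non-log domain
    ratio_log = np.log2(pre_tm + EPS) - np.log2(post_tm + EPS)  # subtraction, log domain
    return ratio_linear, ratio_log

def reconstruct_post_tm(pre_tm, ratio_linear):
    """Recover the post-tone-mapping image from the pre-tone-mapping image and the ratio image."""
    return (pre_tm + EPS) / ratio_linear
```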
[00106] In one embodiment, in the logarithmic domain, the global ratio image that is used to generate the other ratio images to be merged at multiple local scales is efficiently computed by subtraction using 16-bit integer quantities. In one embodiment, a maximum value of a tone-mapped image can be computed and the tone-mapped image can be modified so that no more than a small percentage of pixels fall outside a supported color gamut (for example, an extended YCC color gamut).
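A minimal sketch of the two ideas in this paragraph follows, assuming an arbitrary fixed-point scale for the 16-bit log values and treating the gamut test as a caller-supplied predicate; the scale factor, the 1% ceiling, and the multiplicative adjustment are all assumptions.

```python
import numpy as np

LOG_SCALE = 256  # assumed fixed-point step: log2 luminance in units of 1/256

def global_log_ratio_int16(log_lum, log_tm_global):
    """Global log ratio image computed by subtraction of 16-bit integer quantities."""
    a = np.round(log_lum * LOG_SCALE).astype(np.int16)
    b = np.round(log_tm_global * LOG_SCALE).astype(np.int16)
    diff = a.astype(np.int32) - b.astype(np.int32)
    return diff.astype(np.int16)   # assumes the difference fits in 16 bits

def limit_out_of_gamut(tm_rgb, in_gamut, max_fraction=0.01):
    """Nudge the tone-mapped image until few pixels fall outside the supported gamut."""
    for _ in range(64):            # bounded, illustrative adjustment only
        if np.mean(~in_gamut(tm_rgb)) <= max_fraction:
            break
        tm_rgb = tm_rgb * 0.95
    return tm_rgb
```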
EQUIVALENTS, EXTENSIONS, ALTERNATIVES AND GENERAL PROVISIONS
[00108] In the foregoing specification, an embodiment of the invention has been described with reference to numerous specific details that may vary from implementation to implementation. Thus, the sole and exclusive indicator of what the invention is, and what the applicants intend the invention to be, is the set of claims that issue from this application, in the specific form in which such claims issue, including any subsequent correction. Any definitions expressly provided herein for terms contained in such claims shall govern the meaning of such terms as used in the claims. Hence, no limitation, element, property, feature, advantage, or attribute that is not expressly recited in a claim shall limit the scope of such claim in any way. The specification and drawings are therefore to be regarded in an illustrative rather than a restrictive sense.
Claims (24)
1. A method for encoding a high dynamic range (HDR) image with a processor, the method characterized in that it comprises the steps of: computing a histogram of logarithmic (log) luminance pixel values in the HDR image; generating a tone-mapping curve based on the histogram; computing a global log tone-mapped luminance image based on the log luminance pixel values in the HDR image and the tone-mapping curve; computing a scaled-down global log tone-mapped luminance image based on the global log tone-mapped luminance image; computing a log ratio image based on the log luminance pixel values in the HDR image and the global log tone-mapped luminance image; performing multi-scale resolution filtering on the log ratio image to generate a multi-scale log ratio image; generating a second log tone-mapped image based on the multi-scale log ratio image and the log luminance pixel values in the HDR image; normalizing the second log tone-mapped image to generate an output tone-mapped image based on the scaled-down global log tone-mapped luminance image and the second log tone-mapped image; generating a second ratio image based on the input HDR image and the output tone-mapped image; and quantizing the second ratio image to generate a quantized second ratio image.
2. Method according to claim 1, characterized in that the quantized second ratio image and the output tone-mapped image are provided to an encoder to generate a JPEG-HDR image.
3. Method according to claim 1, characterized in that performing multi-scale resolution filtering on the log ratio image further comprises downscaling the log ratio image by a factor N to generate a first-level downscaled log ratio image, where N is a positive integer.
4. Method according to claim 3, characterized in that performing multi-scale resolution filtering on the log ratio image further comprises downscaling the first-level downscaled log ratio image by the factor N to generate a second-level downscaled log ratio image.
5. Method according to claim 4, characterized in that performing multi-scale resolution filtering on the log ratio image further comprises downscaling the second-level downscaled log ratio image by the factor N to generate a third-level downscaled log ratio image.
6. Method according to claim 5, characterized in that performing multi-scale resolution filtering on the log ratio image further comprises scaling pixel values of the third-level downscaled log ratio image with third-level scale factors to generate a third-level weighted log ratio image.
7. Method according to claim 6, characterized in that performing multi-scale resolution filtering on the log ratio image further comprises scaling pixel values of the second-level downscaled log ratio image with second-level scale factors to generate a second-level weighted log ratio image.
8. Method according to claim 7, characterized in that performing multi-scale resolution filtering on the log ratio image further comprises upscaling the third-level weighted log ratio image by the factor N and adding the second-level weighted log ratio image to generate a second-level upscaled log ratio image.
9. Method according to claim 8, characterized in that performing multi-scale resolution filtering on the log ratio image further comprises scaling pixel values of the first-level downscaled log ratio image with first-level scale factors to generate a first-level weighted log ratio image.
10. Method according to claim 9, characterized in that performing multi-scale resolution filtering on the log ratio image further comprises upscaling the second-level upscaled log ratio image by the factor N and adding the first-level weighted log ratio image to generate a first-level upscaled log ratio image.
11. Method according to claim 10, characterized in that performing multi-scale resolution filtering on the log ratio image further comprises scaling pixel values of the log ratio image with zero-level scale factors to generate a zero-level weighted log ratio image.
12. Method according to claim 11, characterized in that performing multi-scale resolution filtering on the log ratio image further comprises upscaling the first-level upscaled log ratio image by the factor N and adding the zero-level weighted log ratio image to generate the multi-scale log ratio image.
13. Method according to claim 3, characterized in that downscaling the input log ratio image by the factor N comprises reducing a horizontal pixel resolution and a vertical pixel resolution of the log ratio image by the factor N.
14. Method according to claim 3, characterized in that the positive integer is equal to eight (8).
15. Method according to claim 3, characterized in that it further comprises low-pass filtering an output of each of the downscaling steps.
16. Method according to claim 1, characterized in that performing multi-scale resolution filtering on the log ratio image further comprises: downscaling the log ratio image by a factor N to generate a first-level downscaled log ratio image, where N comprises a positive integer; downscaling the first-level downscaled log ratio image by the factor N to generate a second-level downscaled log ratio image; downscaling the second-level downscaled log ratio image by the factor N to generate a third-level downscaled log ratio image; scaling pixel values of the third-level downscaled log ratio image with third-level scale factors to generate a third-level weighted log ratio image; scaling pixel values of the second-level downscaled log ratio image with second-level scale factors to generate a second-level weighted log ratio image; upscaling the third-level weighted log ratio image by the factor N and adding the second-level weighted log ratio image to generate a second-level upscaled log ratio image; scaling pixel values of the first-level downscaled log ratio image with first-level scale factors to generate a first-level weighted log ratio image; upscaling the second-level upscaled log ratio image by the factor N and adding the first-level weighted log ratio image to generate a first-level upscaled log ratio image; scaling pixel values of the log ratio image with zero-level scale factors to generate a zero-level weighted log ratio image; and upscaling the first-level upscaled log ratio image by the factor N and adding the zero-level weighted log ratio image to generate the multi-scale log ratio image.
17. Method according to claim 16, characterized in that it further comprises low-pass filtering an output of each of the downscaling steps.
18. Method according to claim 1, characterized in that it further comprises normalizing the computed histogram.
19. Method according to claim 18, characterized in that the histogram normalization comprises contrast-limited adaptive histogram equalization (CLAHE).
20. Method according to claim 19, characterized in that the histogram normalization comprises histogram CLAHE tone-map normalization (HCTN).
21. Integrated circuit (IC) device, characterized in that it comprises: a semiconductor die; and an array of active devices disposed on the semiconductor die, structurally arranged and configured to comprise: a tone mapper that functions to: compute a histogram of logarithmic (log) luminance pixel values in an input high dynamic range (HDR) image; generate a tone-mapping curve based on the histogram; compute a global log tone-mapped luminance image based on the log luminance pixel values in the HDR image and the tone-mapping curve; compute a scaled-down global log tone-mapped luminance image based on the global log tone-mapped luminance image; and compute a log ratio image based on the log luminance pixel values in the HDR image and the global log tone-mapped luminance image; a multi-scale filter that functions to: perform multi-scale resolution filtering on the log ratio image to generate a multi-scale log ratio image; generate a second log tone-mapped image based on the multi-scale log ratio image and the log luminance pixel values in the HDR image; normalize the second log tone-mapped image to generate an output tone-mapped image based on the scaled-down global log tone-mapped luminance image and the second log tone-mapped image; and generate a second ratio image based on the input HDR image and the output tone-mapped image; and a quantizer that quantizes the second ratio image to generate a quantized second ratio image.
22. Integrated circuit device according to claim 21, characterized in that performing multi-scale resolution filtering on the log ratio image further comprises: downscaling the log ratio image by a factor N to generate a first-level downscaled log ratio image, where N is a positive integer; downscaling the first-level downscaled log ratio image by the factor N to generate a second-level downscaled log ratio image; downscaling the second-level downscaled log ratio image by the factor N to generate a third-level downscaled log ratio image; scaling pixel values of the third-level downscaled log ratio image with third-level scale factors to generate a third-level weighted log ratio image; scaling pixel values of the second-level downscaled log ratio image with second-level scale factors to generate a second-level weighted log ratio image; upscaling the third-level weighted log ratio image by the factor N and adding the second-level weighted log ratio image to generate a second-level upscaled log ratio image; scaling pixel values of the first-level downscaled log ratio image with first-level scale factors to generate a first-level weighted log ratio image; upscaling the second-level upscaled log ratio image by the factor N and adding the first-level weighted log ratio image to generate a first-level upscaled log ratio image; scaling pixel values of the log ratio image with zero-level scale factors to generate a zero-level weighted log ratio image; and upscaling the first-level upscaled log ratio image by the factor N and adding the zero-level weighted log ratio image to generate the multi-scale log ratio image.
23. Non-transient processor-readable storage medium, characterized in that it stores instructions that, when executed on a processor, cause, control, or configure the processor to perform or control a process for encoding a high dynamic range (HDR) image, wherein the process comprises the steps of: computing a histogram of logarithmic (log) luminance pixel values in the HDR image; generating a tone-mapping curve based on the histogram; computing a global log tone-mapped luminance image based on the log luminance pixel values in the HDR image and the tone-mapping curve; computing a scaled-down global log tone-mapped luminance image based on the global log tone-mapped luminance image; computing a log ratio image based on the log luminance pixel values in the HDR image and the global log tone-mapped luminance image; performing multi-scale resolution filtering on the log ratio image to generate a multi-scale log ratio image; generating a second log tone-mapped image based on the multi-scale log ratio image and the log luminance pixel values in the HDR image; normalizing the second log tone-mapped image to generate an output tone-mapped image based on the scaled-down global log tone-mapped luminance image and the second log tone-mapped image; generating a second ratio image based on the input HDR image and the output tone-mapped image; and quantizing the second ratio image to generate a quantized second ratio image.
24. Non-transient processor-readable storage medium according to claim 23, characterized in that performing multi-scale resolution filtering on the log ratio image further comprises: downscaling the log ratio image by a factor N to generate a first-level downscaled log ratio image, where N is a positive integer; downscaling the first-level downscaled log ratio image by the factor N to generate a second-level downscaled log ratio image; downscaling the second-level downscaled log ratio image by the factor N to generate a third-level downscaled log ratio image; scaling pixel values of the third-level downscaled log ratio image with third-level scale factors to generate a third-level weighted log ratio image; scaling pixel values of the second-level downscaled log ratio image with second-level scale factors to generate a second-level weighted log ratio image; upscaling the third-level weighted log ratio image by the factor N and adding the second-level weighted log ratio image to generate a second-level upscaled log ratio image; scaling pixel values of the first-level downscaled log ratio image with first-level scale factors to generate a first-level weighted log ratio image; upscaling the second-level upscaled log ratio image by the factor N and adding the first-level weighted log ratio image to generate a first-level upscaled log ratio image; scaling pixel values of the log ratio image with zero-level scale factors to generate a zero-level weighted log ratio image; and upscaling the first-level upscaled log ratio image by the factor N and adding the zero-level weighted log ratio image to generate the multi-scale log ratio image.
Similar technologies:
Publication number | Publication date | Patent title
BR112014008513B1|2021-08-17|METHOD FOR ENCODING AN HDR IMAGE, INTEGRATED CIRCUIT DEVICE AND NON TRANSIENT PROCESSOR READIBLE STORAGE MEANS
JP6039763B2|2016-12-07|Method, apparatus and storage medium for local tone mapping
TWI521973B|2016-02-11|Encoding, decoding, and representing high dynamic range images
JP5180344B2|2013-04-10|Apparatus and method for decoding high dynamic range image data, viewer capable of processing display image, and display apparatus
RU2736103C2|2020-11-11|Re-shaping of signals for wide dynamic range signals
KR20120112709A|2012-10-11|High dynamic range image generation and rendering
Deever et al.2013|Digital camera image formation: Processing and storage
CN110770787A|2020-02-07|Efficient end-to-end single-layer reverse display management coding
WO2021093980A1|2021-05-20|Device and method for pre-processing image data for a computer vision application
Patent family:
Publication number | Publication date
RU2014114631A|2015-10-20|
KR20140038566A|2014-03-28|
US9076224B1|2015-07-07|
US9374589B2|2016-06-21|
EP2748792A1|2014-07-02|
BR112014008513A2|2017-04-18|
EP3168809A1|2017-05-17|
CN105787908A|2016-07-20|
KR101448494B1|2014-10-15|
KR101970122B1|2019-04-19|
JP5747136B2|2015-07-08|
KR20150029606A|2015-03-18|
JP6255063B2|2017-12-27|
US9467704B2|2016-10-11|
US20160205405A1|2016-07-14|
CN103843032A|2014-06-04|
JP2015508589A|2015-03-19|
US20150249832A1|2015-09-03|
CN105787909A|2016-07-20|
CN103843032B|2016-04-20|
HK1221544A1|2017-06-02|
WO2014025588A1|2014-02-13|
CN105787909B|2018-07-20|
RU2580093C2|2016-04-10|
JP2015172956A|2015-10-01|
JP5965025B2|2016-08-03|
US20150206295A1|2015-07-23|
CN105787908B|2019-05-14|
EP2748792B1|2016-12-21|
JP2016197430A|2016-11-24|
Legal status:
2018-12-04| B06F| Objections, documents and/or translations needed after an examination request according to [chapter 6.6 patent gazette]|
2020-04-07| B06U| Preliminary requirement: requests with searches performed by other patent offices: procedure suspended [chapter 6.21 patent gazette]|
2021-06-29| B09A| Decision: intention to grant [chapter 9.1 patent gazette]|
2021-08-17| B16A| Patent or certificate of addition of invention granted [chapter 16.1 patent gazette]|Free format text: TERM OF VALIDITY: 20 (TWENTY) YEARS COUNTED FROM 31/07/2013, SUBJECT TO THE LEGAL CONDITIONS. |
Priority:
Application number | Filing date | Patent title
US201261681061P| true| 2012-08-08|2012-08-08|
US61/681,061|2012-08-08|
PCT/US2013/053036|WO2014025588A1|2012-08-08|2013-07-31|Image processing for hdr images|